
    Incremental Learning Using a Grow-and-Prune Paradigm with Efficient Neural Networks

    Deep neural networks (DNNs) have become a widely deployed model for numerous machine learning applications. However, their fixed architecture, substantial training cost, and significant model redundancy make it difficult to efficiently update them to accommodate previously unseen data. To solve these problems, we propose an incremental learning framework based on a grow-and-prune neural network synthesis paradigm. When new data arrive, the neural network first grows new connections based on the gradients to increase the network capacity to accommodate new data. Then, the framework iteratively prunes away connections based on the magnitude of weights to enhance network compactness, and hence recover efficiency. Finally, the model rests at a lightweight DNN that is both ready for inference and suitable for future grow-and-prune updates. The proposed framework improves accuracy, shrinks network size, and significantly reduces the additional training cost for incoming data compared to conventional approaches, such as training from scratch and network fine-tuning. For the LeNet-300-100 and LeNet-5 neural network architectures derived for the MNIST dataset, the framework reduces training cost by up to 64% (63%) and 67% (63%) compared to training from scratch (network fine-tuning), respectively. For the ResNet-18 architecture derived for the ImageNet dataset and DeepSpeech2 for the AN4 dataset, the corresponding training cost reductions against training from scratch (network fine-tuning) are 64% (60%) and 67% (62%), respectively. Our derived models contain fewer network parameters but achieve higher accuracy relative to conventional baselines.
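
    As a rough illustration of the grow-and-prune idea described above (gradient-based growth of new connections followed by magnitude-based pruning), the Python sketch below updates a binary connectivity mask for a single weight tensor. It is a minimal sketch under assumed PyTorch tensors and illustrative fractions, not the paper's actual algorithm or hyper-parameters.

    # Minimal sketch of one grow-and-prune update for a single weight tensor.
    # Names, fractions, and thresholds are illustrative assumptions only.
    import torch

    def grow_connections(weight, grad, mask, grow_frac=0.05):
        """Re-activate the currently masked connections with the largest gradients."""
        inactive = (mask == 0)
        n_grow = int(grow_frac * mask.numel())
        if n_grow == 0 or inactive.sum() == 0:
            return mask
        # Rank inactive positions by gradient magnitude: these add the most useful capacity.
        scores = grad.abs() * inactive
        _, idx = torch.topk(scores.flatten(), min(n_grow, int(inactive.sum())))
        new_mask = mask.clone().flatten()
        new_mask[idx] = 1.0
        return new_mask.view_as(mask)

    def prune_connections(weight, mask, prune_frac=0.05):
        """Remove the smallest-magnitude active weights to recover compactness."""
        active = (mask == 1)
        n_prune = int(prune_frac * int(active.sum()))
        if n_prune == 0:
            return mask
        # Inactive entries are set to +inf so they never appear in the bottom-k selection.
        scores = weight.abs().masked_fill(~active, float("inf"))
        _, idx = torch.topk(-scores.flatten(), n_prune)
        new_mask = mask.clone().flatten()
        new_mask[idx] = 0.0
        return new_mask.view_as(mask)

    # Usage sketch, after computing gradients on newly arrived data:
    #   mask = grow_connections(layer.weight, layer.weight.grad, mask)
    #   ... train for a few epochs with layer.weight * mask ...
    #   mask = prune_connections(layer.weight, mask)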

    PinMe: Tracking a Smartphone User around the World

    With the pervasive use of smartphones that sense, collect, and process valuable information about the environment, ensuring location privacy has become one of the most important concerns in the modern age. A few recent research studies discuss the feasibility of processing data gathered by a smartphone to locate the phone's owner, even when the user does not intend to share his location information, e.g., when the Global Positioning System (GPS) is off. Previous research efforts rely on at least one of the two following fundamental requirements, which significantly limit the ability of the adversary: (i) the attacker must accurately know either the user's initial location or the set of routes through which the user travels and/or (ii) the attacker must measure a set of features, e.g., the device's acceleration, for potential routes in advance and construct a training dataset. In this paper, we demonstrate that neither of the above-mentioned requirements is essential for compromising the user's location privacy. We describe PinMe, a novel user-location mechanism that exploits non-sensory/sensory data stored on the smartphone, e.g., the environment's air pressure, along with publicly-available auxiliary information, e.g., elevation maps, to estimate the user's location when all location services, e.g., GPS, are turned off. Comment: This is the preprint version; the paper has been published in IEEE Trans. Multi-Scale Computing Systems, DOI: 10.1109/TMSCS.2017.275146
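
    To make the air-pressure example concrete, the toy sketch below converts a barometer reading to an approximate altitude with the standard barometric formula and ranks candidate locations from a public elevation map by how well they match. This is only a toy under assumed city/elevation pairs, not PinMe's attack pipeline, but it shows why barometer data alone already leaks coarse location information.

    # Toy illustration: estimate altitude from a smartphone barometer reading and
    # score candidate locations taken from a public elevation map. This is NOT
    # PinMe's algorithm; the cities and reading below are made-up examples.
    import math

    def pressure_to_altitude_m(pressure_hpa, sea_level_hpa=1013.25):
        """International barometric formula (approximate, standard atmosphere)."""
        return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

    def rank_candidates(pressure_hpa, candidates):
        """candidates: list of (name, elevation_m) pairs from an elevation map."""
        est = pressure_to_altitude_m(pressure_hpa)
        return sorted(candidates, key=lambda c: abs(c[1] - est))

    # Hypothetical example: a 900 hPa reading is far more consistent with a
    # high-altitude city than with one near sea level.
    cities = [("Amsterdam", -2), ("Denver", 1609), ("Mexico City", 2240)]
    print(rank_candidates(900.0, cities)[0])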

    Study on the Tourism Industry Competitiveness of Nanyue Economic Zone

    This paper analyzes the elements of tourism development in the Nanyue economic zone, such as production factors, demand conditions, related and supporting industries, enterprises, government, and opportunities; identifies the elements that support, and the factors that restrict, the competitiveness of the zone's tourism industry; and puts forward countermeasures for improving that competitiveness. Key words: Nanyue economic zone; Tourism industry; Competitiveness mode

    Fully Dynamic Inference with Deep Neural Networks

    Modern deep neural networks are powerful and widely applicable models that extract task-relevant information through multi-level abstraction. Their cross-domain success, however, is often achieved at the expense of computational cost, high memory bandwidth, and long inference latency, which prevents their deployment in resource-constrained and time-sensitive scenarios, such as edge-side inference and self-driving cars. While recently developed methods for creating efficient deep neural networks are making their real-world deployment more feasible by reducing model size, they do not fully exploit input properties on a per-instance basis to maximize computational efficiency and task accuracy. In particular, most existing methods typically use a one-size-fits-all approach that identically processes all inputs. Motivated by the fact that different images require different feature embeddings to be accurately classified, we propose a fully dynamic paradigm that imparts deep convolutional neural networks with hierarchical inference dynamics at the level of layers and individual convolutional filters/channels. Two compact networks, called Layer-Net (L-Net) and Channel-Net (C-Net), predict on a per-instance basis which layers or filters/channels are redundant and therefore should be skipped. L-Net and C-Net also learn how to scale retained computation outputs to maximize task accuracy. By integrating L-Net and C-Net into a joint design framework, called LC-Net, we consistently outperform state-of-the-art dynamic frameworks with respect to both efficiency and classification accuracy. On the CIFAR-10 dataset, LC-Net results in up to 11.9x fewer floating-point operations (FLOPs) and up to 3.3% higher accuracy compared to other dynamic inference methods. On the ImageNet dataset, LC-Net achieves up to 1.4x fewer FLOPs and up to 4.6% higher Top-1 accuracy than the other methods.
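
    The sketch below illustrates per-instance channel gating in the spirit of the C-Net component described above: a small predictor looks at a pooled summary of the input and decides which output channels of a convolution to keep and how to scale the retained ones. All module names, sizes, and the hard 0.5 threshold are assumptions for illustration; a trainable version would need a differentiable relaxation (e.g., a straight-through estimator), and a real implementation would avoid computing the skipped channels at all.

    # Minimal sketch of per-instance channel skipping and scaling. Names, sizes,
    # and the hard-threshold gating rule are illustrative assumptions, not the
    # paper's exact design.
    import torch
    import torch.nn as nn

    class ChannelGate(nn.Module):
        def __init__(self, in_channels, out_channels):
            super().__init__()
            # Tiny predictor: pooled input features -> one gate and one scale per output channel.
            self.predictor = nn.Linear(in_channels, 2 * out_channels)
            self.out_channels = out_channels

        def forward(self, x):
            # x: (N, C_in, H, W); summarize the instance with global average pooling.
            summary = x.mean(dim=(2, 3))
            logits = self.predictor(summary)
            keep_logit, scale = logits.split(self.out_channels, dim=1)
            keep = (torch.sigmoid(keep_logit) > 0.5).float()   # which output channels to keep
            return keep, torch.relu(scale)                     # skip mask and per-channel scaling

    class GatedConvBlock(nn.Module):
        def __init__(self, in_channels, out_channels):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, out_channels, 3, padding=1)
            self.gate = ChannelGate(in_channels, out_channels)

        def forward(self, x):
            keep, scale = self.gate(x)
            y = torch.relu(self.conv(x))
            # Zero out "skipped" channels and rescale the retained ones per instance.
            return y * (keep * scale).unsqueeze(-1).unsqueeze(-1)

    # Usage sketch:
    # block = GatedConvBlock(64, 128)
    # out = block(torch.randn(8, 64, 32, 32))   # out: (8, 128, 32, 32)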

    DiabDeep: Pervasive Diabetes Diagnosis based on Wearable Medical Sensors and Efficient Neural Networks

    Diabetes impacts the quality of life of millions of people. However, diabetes diagnosis is still an arduous process, given that the disease develops and gets treated outside the clinic. The emergence of wearable medical sensors (WMSs) and machine learning points to a way forward to address this challenge. WMSs enable a continuous mechanism to collect and analyze physiological signals. However, disease diagnosis based on WMS data and its effective deployment on resource-constrained edge devices remain challenging due to inefficient feature extraction and vast computation cost. In this work, we propose a framework called DiabDeep that combines efficient neural networks (called DiabNNs) with WMSs for pervasive diabetes diagnosis. DiabDeep bypasses the feature extraction stage and acts directly on WMS data. It enables both an (i) accurate inference on the server, e.g., a desktop, and (ii) efficient inference on an edge device, e.g., a smartphone, based on varying design goals and resource budgets. On the server, we stack sparsely connected layers to deliver high accuracy. On the edge, we use a hidden-layer long short-term memory based recurrent layer to cut down on computation and storage. At the core of DiabDeep lies a grow-and-prune training flow: it leverages gradient-based growth and magnitude-based pruning algorithms to learn both weights and connections for DiabNNs. We demonstrate the effectiveness of DiabDeep through analyzing data from 52 participants. For server (edge) side inference, we achieve a 96.3% (95.3%) accuracy in classifying diabetics against healthy individuals, and a 95.7% (94.6%) accuracy in distinguishing among type-1 diabetic, type-2 diabetic, and healthy individuals. Against conventional baselines, DiabNNs achieve higher accuracy, while reducing the model size (FLOPs) by up to 454.5x (8.9x). Therefore, the system can be viewed as pervasive and efficient, yet very accurate.
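
    As a minimal sketch of an edge-side model that acts directly on windowed WMS signals, the code below uses a standard LSTM recurrent layer followed by a linear head for the three-class setting mentioned in the abstract. The standard LSTM stands in for the paper's hidden-layer LSTM, and the layer sizes, window length, and channel count are all assumptions.

    # Toy sketch of an edge-side classifier that consumes raw WMS signal windows
    # directly (no hand-crafted features). All sizes and the three-class setup
    # are illustrative assumptions, not DiabDeep's actual configuration.
    import torch
    import torch.nn as nn

    class EdgeDiabClassifier(nn.Module):
        def __init__(self, n_channels=8, hidden=64, n_classes=3):
            super().__init__()
            # n_classes=3: type-1 diabetic, type-2 diabetic, healthy (per the abstract).
            self.rnn = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):
            # x: (batch, time_steps, n_channels) -- a window of raw physiological signals.
            _, (h_n, _) = self.rnn(x)
            return self.head(h_n[-1])        # class logits

    # Usage sketch: a 30-second window sampled at 10 Hz from 8 sensor channels.
    # model = EdgeDiabClassifier()
    # logits = model(torch.randn(4, 300, 8))   # logits: (4, 3)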

    Trainable Projected Gradient Method for Robust Fine-tuning

    Recent studies on transfer learning have shown that selectively fine-tuning a subset of layers or customizing different learning rates for each layer can greatly improve robustness to out-of-distribution (OOD) data and retain generalization capability in the pre-trained models. However, most of these methods employ manually crafted heuristics or expensive hyper-parameter searches, which prevent them from scaling up to large datasets and neural networks. To solve this problem, we propose Trainable Projected Gradient Method (TPGM) to automatically learn the constraint imposed for each layer for a fine-grained fine-tuning regularization. This is motivated by formulating fine-tuning as a bi-level constrained optimization problem. Specifically, TPGM maintains a set of projection radii, i.e., distance constraints between the fine-tuned model and the pre-trained model, for each layer, and enforces them through weight projections. To learn the constraints, we propose a bi-level optimization to automatically learn the best set of projection radii in an end-to-end manner. Theoretically, we show that the bi-level optimization formulation could explain the regularization capability of TPGM. Empirically, with little hyper-parameter search cost, TPGM outperforms existing fine-tuning methods in OOD performance while matching the best in-distribution (ID) performance. For example, when fine-tuned on DomainNet-Real and ImageNet, compared to vanilla fine-tuning, TPGM shows 22% and 10% relative OOD improvement respectively on their sketch counterparts. Code is available at \url{https://github.com/PotatoTian/TPGM}. Comment: Accepted to CVPR202
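
    The sketch below shows the per-layer projection step that TPGM's distance constraints imply: after each optimizer update, any layer that has drifted farther than its radius from the pre-trained weights is scaled back onto the constraint ball. The bi-level optimization that learns the radii end-to-end is omitted, so the fixed per-layer radii here are an illustrative simplification rather than the paper's full method.

    # Minimal sketch of projecting each layer onto a ball of radius r around the
    # pre-trained weights. Radii are plain floats here; learning them via
    # bi-level optimization (as in TPGM) is omitted.
    import torch

    @torch.no_grad()
    def project_layer(finetuned_w, pretrained_w, radius):
        """Project w onto {w : ||w - w_pre||_2 <= radius} for one layer."""
        delta = finetuned_w - pretrained_w
        norm = delta.norm()
        if norm > radius:
            finetuned_w.copy_(pretrained_w + delta * (radius / norm))

    @torch.no_grad()
    def project_model(model, pretrained_state, radii):
        """Apply the per-layer projection to every parameter after an optimizer step."""
        for name, param in model.named_parameters():
            project_layer(param, pretrained_state[name], radii[name])

    # Usage sketch (inside the fine-tuning loop):
    #   loss.backward(); optimizer.step()
    #   project_model(model, pretrained_state, radii)   # enforce the distance constraints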